We present High Dynamic Range Radiance (HDR) fields, HDR-Plenoxels, which learn a plenoptic function of the 3D HDR radiance field, geometric information, and the varying camera settings inherent in 2D low dynamic range (LDR) images. Our voxel-based volume rendering pipeline reconstructs the HDR radiance field end-to-end from only multi-view LDR images taken under varying camera settings, and converges quickly. To handle diverse cameras in real-world scenes, we introduce a tone mapping module that models the in-camera imaging pipeline (ISP) of digital cameras and disentangles radiometric settings. The tone mapping module lets us render each novel view while controlling its radiometric settings. Finally, we construct a multi-view dataset with varying camera conditions that fits our problem setting. Our experiments show that HDR-Plenoxels can render detailed, high-quality HDR novel views from LDR images captured with a variety of cameras.
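To make the tone-mapping idea concrete, here is a minimal PyTorch sketch of mapping HDR radiance to LDR pixel values through a per-view white-balance gain and a learnable monotonic camera response curve. The module name, bin count, and parameterization are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ToneMapper(nn.Module):
    """Illustrative tone-mapping module: per-view white balance followed by a
    learnable monotonic response curve mapping HDR radiance to LDR values.
    A sketch of the general idea, not the paper's exact formulation."""

    def __init__(self, num_views: int, num_bins: int = 64):
        super().__init__()
        # One RGB white-balance gain per training view (radiometric setting).
        self.log_gain = nn.Parameter(torch.zeros(num_views, 3))
        # Non-negative increments parameterize a monotonic camera response curve.
        self.crf_deltas = nn.Parameter(torch.zeros(num_bins))

    def forward(self, hdr_rgb: torch.Tensor, view_idx: torch.Tensor) -> torch.Tensor:
        # hdr_rgb: (N, 3) radiance from volume rendering; view_idx: (N,) view index.
        wb = torch.exp(self.log_gain)[view_idx]
        x = (hdr_rgb * wb).clamp(min=0.0, max=1.0)
        # Build a monotonically increasing response curve via a cumulative softplus.
        curve = torch.cumsum(nn.functional.softplus(self.crf_deltas), dim=0)
        curve = curve / curve[-1]                        # normalize to [0, 1]
        # Look up LDR intensity by linear interpolation over the curve bins.
        idx = x * (curve.numel() - 1)
        lo = idx.floor().long().clamp(max=curve.numel() - 2)
        frac = idx - lo.float()
        return curve[lo] * (1 - frac) + curve[lo + 1] * frac   # (N, 3) LDR prediction
```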
Transformer encoder architectures have recently achieved state-of-the-art results on monocular 3D human mesh reconstruction, but they require a large number of parameters and expensive computation. The large memory footprint and slow inference speed make such models hard to deploy in practice. In this paper, we propose a novel transformer encoder-decoder architecture for 3D human mesh reconstruction from a single image, called FastMETRO. We identify that the performance bottleneck of encoder-based transformers is caused by their token design, which introduces high-complexity interactions among input tokens. We disentangle these interactions with an encoder-decoder architecture, which allows our model to use far fewer parameters and shorter inference time. In addition, we impose prior knowledge of the human body's morphological relationships via attention masking and mesh upsampling operations, leading to faster convergence and higher accuracy. FastMETRO improves the Pareto front of accuracy and efficiency and clearly outperforms image-based methods on Human3.6M and 3DPW. Furthermore, we validate its generalizability on FreiHAND.
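A rough sketch of the encoder-decoder separation described above: image features pass only through the encoder, while learnable joint and vertex queries attend to them in the decoder and are regressed to 3D coordinates. Dimensions, layer counts, and the coarse vertex count below are illustrative, not FastMETRO's exact configuration.

```python
import torch
import torch.nn as nn

class EncoderDecoderMeshRegressor(nn.Module):
    """Sketch of the encoder-decoder idea: image tokens are encoded once, and
    joint/vertex query tokens cross-attend to them in the decoder."""

    def __init__(self, feat_dim=512, num_joints=14, num_coarse_vertices=431):
        super().__init__()
        self.transformer = nn.Transformer(
            d_model=feat_dim, nhead=8,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        # Decoder queries: one learnable token per joint and per coarse mesh vertex.
        self.queries = nn.Parameter(torch.randn(num_joints + num_coarse_vertices, feat_dim))
        self.coord_head = nn.Linear(feat_dim, 3)   # regress 3D coordinates per token

    def forward(self, image_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (B, HW, feat_dim) flattened CNN feature map.
        tgt = self.queries.unsqueeze(0).expand(image_tokens.size(0), -1, -1)
        out = self.transformer(src=image_tokens, tgt=tgt)    # (B, J+V, feat_dim)
        return self.coord_head(out)                          # (B, J+V, 3)


# Usage sketch: a 7x7 backbone feature map flattened into 49 tokens.
model = EncoderDecoderMeshRegressor()
coords = model(torch.randn(2, 49, 512))
print(coords.shape)  # torch.Size([2, 445, 3])
```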
We propose CLIP-Actor, a text-driven motion recommendation and neural mesh stylization system for human mesh animation. CLIP-Actor animates a 3D human mesh to conform to a text prompt by recommending a motion sequence and learning mesh style attributes. Prior work fails to produce plausible results when the artist-designed mesh content does not conform to the text from the beginning. Instead, we build a text-driven human motion recommendation system by leveraging a large-scale human motion dataset with language labels. Given a natural language prompt, CLIP-Actor first suggests a human motion that conforms to the prompt in a coarse-to-fine manner. We then propose a synthesize-through-optimization method that adds detail to the recommended mesh sequence in a way that is disentangled from the per-frame pose, allowing the style attributes to conform to the prompt in a temporally consistent and pose-agnostic manner. The decoupled neural optimization also enables spatio-temporal view augmentation over the human motion. We further propose mask-weighted embedding attention, which stabilizes the optimization by rejecting distracting renders that contain few foreground pixels. We demonstrate that CLIP-Actor produces plausible, human-recognizable stylized 3D humans in motion, with detailed geometry and texture, from natural language prompts.
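The stylization stage rests on a CLIP-space objective between rendered frames and the text prompt. Below is a generic, hedged sketch of such an objective using the OpenAI CLIP package; CLIP-Actor's full pipeline (motion recommendation, synthesize-through-optimization, mask-weighted embedding attention) is not reproduced here, and the rendered input is a placeholder.

```python
import torch
import clip  # OpenAI CLIP (https://github.com/openai/CLIP)

def clip_style_loss(rendered: torch.Tensor, prompt: str, device: str = "cpu") -> torch.Tensor:
    """Generic CLIP-guidance objective: pull rendered frames toward the prompt
    in CLIP embedding space. Illustrative only; not CLIP-Actor's exact loss."""
    model, _ = clip.load("ViT-B/32", device=device)
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    image_feat = model.encode_image(rendered)   # rendered: (B, 3, 224, 224), CLIP-normalized
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    # Maximize cosine similarity between every rendered frame and the prompt.
    return 1.0 - (image_feat @ text_feat.T).mean()
```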
We present an end-to-end unified 3D mesh recovery method for humans and quadrupeds, trained in a weakly supervised manner. Unlike recent work that focuses on a single target class, we aim to recover 3D meshes of a broader set of classes with a single multi-task model. However, no existing dataset directly enables such multi-task learning, since no single image carries both human and animal annotations; for example, human images have no animal pose annotations. We therefore have to design a new way to exploit heterogeneous datasets. To make this unstable, disjoint multi-task learning jointly trainable, we propose to exploit the morphological similarity between humans and animals, motivated by animal exercises in which humans imitate animal poses. We realize this morphological similarity through semantic correspondences, called sub-keypoints, which enable joint training of the human and animal mesh regression branches. In addition, we propose class-sensitive regularization methods that avoid a mean-shape bias and improve distinctiveness across classes. Our method compares favorably against recent uni-modal models on various human and animal datasets while being far more compact.
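One way to read "exploiting heterogeneous datasets" is that every loss term is masked by annotation availability, since an image carries either human or animal labels but never both. The sketch below illustrates only that masking, with hypothetical key names; the paper's sub-keypoint correspondences and class-sensitive regularizers are not shown.

```python
import torch

def disjoint_multitask_loss(outputs: dict, batch: dict) -> torch.Tensor:
    """Sketch of joint training on heterogeneous data: each sample carries either
    human or animal keypoint labels, so each loss term is applied only where its
    annotations exist. Key names are illustrative, not the paper's."""
    loss = torch.zeros(())
    if batch.get("human_kpts") is not None:      # human-annotated samples
        loss = loss + torch.nn.functional.mse_loss(
            outputs["human_kpts"], batch["human_kpts"])
    if batch.get("animal_kpts") is not None:     # animal-annotated samples
        loss = loss + torch.nn.functional.mse_loss(
            outputs["animal_kpts"], batch["animal_kpts"])
    return loss
```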
Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
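A simplified sketch of the vector-quantization estimator mentioned above: jointly quantize both feature sets with k-means, form the two induced histograms, trace exponentiated KL divergences to their mixtures, and summarize the resulting frontier by its area. The constants and details below are illustrative and differ from the official MAUVE implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def divergence_frontier_score(p_feats: np.ndarray, q_feats: np.ndarray,
                              k: int = 16, scale: float = 5.0) -> float:
    """Toy divergence-frontier score between two sets of embeddings."""
    labels = KMeans(n_clusters=k, n_init=10).fit(np.vstack([p_feats, q_feats])).labels_
    p_hist = np.bincount(labels[: len(p_feats)], minlength=k) / len(p_feats)
    q_hist = np.bincount(labels[len(p_feats):], minlength=k) / len(q_feats)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    frontier = []
    for lam in np.linspace(0.01, 0.99, 25):
        r = lam * p_hist + (1 - lam) * q_hist      # mixture distribution
        frontier.append((np.exp(-scale * kl(q_hist, r)), np.exp(-scale * kl(p_hist, r))))
    xs, ys = map(np.array, zip(*sorted(frontier)))
    # Trapezoid area under the frontier: 1 when P = Q, smaller as they diverge.
    return float(np.sum((xs[1:] - xs[:-1]) * (ys[1:] + ys[:-1]) / 2.0))
```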
We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDP) whose transition probability is parametrized by an unknown transition core with features of state and action. Despite much recent progress in analyzing algorithms in the linear MDP setting, the understanding of more general transition models is very restrictive. In this paper, we establish a provably efficient RL algorithm for the MDP whose state transition is given by a multinomial logistic model. To balance the exploration-exploitation trade-off, we propose an upper confidence bound-based algorithm. We show that our proposed algorithm achieves $\tilde{\mathcal{O}}(d \sqrt{H^3 T})$ regret bound where $d$ is the dimension of the transition core, $H$ is the horizon, and $T$ is the total number of steps. To the best of our knowledge, this is the first model-based RL algorithm with multinomial logistic function approximation with provable guarantees. We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms the existing methods, hence achieving both provable efficiency and practical superior performance.
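The transition model in question can be sketched as a softmax over candidate next states, with logits given by inner products between state-action-next-state features and the unknown transition core; the algorithm then acts optimistically by adding a confidence bonus to its estimates. The snippet below shows only the transition model, with hypothetical shapes.

```python
import numpy as np

def mnl_transition_probs(phi: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Multinomial logistic transition model sketch: probability of each candidate
    next state is a softmax over feature-core inner products.
    phi: (num_next_states, d) features of (s, a, s'); theta: (d,) transition core."""
    logits = phi @ theta
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()


# Toy usage: 3 candidate next states, feature dimension d = 4.
rng = np.random.default_rng(0)
print(mnl_transition_probs(rng.normal(size=(3, 4)), rng.normal(size=4)))  # sums to 1
```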
This work presents a detailed linguistic analysis into why larger Transformer-based pre-trained language models with more parameters and lower perplexity nonetheless yield surprisal estimates that are less predictive of human reading times. First, regression analyses show a strictly monotonic, positive log-linear relationship between perplexity and fit to reading times for the more recently released five GPT-Neo variants and eight OPT variants on two separate datasets, replicating earlier results limited to just GPT-2 (Oh et al., 2022). Subsequently, analysis of residual errors reveals a systematic deviation of the larger variants, such as underpredicting reading times of named entities and making compensatory overpredictions for reading times of function words such as modals and conjunctions. These results suggest that the propensity of larger Transformer-based models to 'memorize' sequences during training makes their surprisal estimates diverge from humanlike expectations, which warrants caution in using pre-trained language models to study human language processing.
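For reference, here is a minimal sketch of the surprisal predictor used in such analyses, computed from a causal language model via HuggingFace transformers; word-level alignment with reading-time corpora and the regression models themselves are omitted.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(text: str, model_name: str = "gpt2"):
    """Per-token surprisal (-log2 p) from a causal LM; a minimal sketch of the
    predictor regressed against human reading times."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tok(text, return_tensors="pt").input_ids              # (1, T)
    with torch.no_grad():
        logits = model(ids).logits                               # (1, T, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Surprisal of token t comes from the distribution predicted at position t-1.
    surp = -log_probs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1) / math.log(2)
    return list(zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surp.tolist()))

print(token_surprisals("The cat sat on the mat."))
```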
Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.
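The spatial-knowledge component can be illustrated with a single graph-convolution layer over an object-region knowledge graph. This is a generic GCN sketch with an assumed toy adjacency; the paper's dual Graph Encoder Networks and pre-training tasks are richer than what is shown here.

```python
import torch
import torch.nn as nn

class GraphEncoderLayer(nn.Module):
    """One graph-convolution layer over a knowledge graph of object-region
    relations (e.g., an edge between "sink" and "kitchen")."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalize the adjacency (with self-loops), then propagate.
        a = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]
        return torch.relu(self.lin(a_norm @ node_feats))


# Toy graph: nodes 0-2 are objects, node 3 is a region connected to all of them.
adj = torch.zeros(4, 4)
adj[3, :3] = adj[:3, 3] = 1.0
layer = GraphEncoderLayer(in_dim=8, out_dim=16)
print(layer(torch.randn(4, 8), adj).shape)   # torch.Size([4, 16])
```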
Transformer-based large language models are trained to make predictions about the next word by aggregating representations of previous tokens through their self-attention mechanism. In the field of cognitive modeling, such attention patterns have recently been interpreted as embodying the process of cue-based retrieval, in which attention over multiple targets is taken to generate interference and latency during retrieval. Under this framework, this work first defines an entropy-based predictor that quantifies the diffuseness of self-attention, as well as distance-based predictors that capture the incremental change in attention patterns across timesteps. Moreover, following recent studies that question the informativeness of attention weights, we also experiment with alternative methods for incorporating vector norms into attention weights. Regression experiments using predictors calculated from the GPT-2 language model show that these predictors deliver a substantially better fit to held-out self-paced reading and eye-tracking data over a rigorous baseline including GPT-2 surprisal. Additionally, the distance-based predictors generally demonstrated higher predictive power, with effect sizes of up to 6.59 ms per standard deviation on self-paced reading times (compared to 2.82 ms for surprisal) and 1.05 ms per standard deviation on eye-gaze durations (compared to 3.81 ms for surprisal).
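A rough sketch of the entropy-based predictor: compute each token's self-attention distribution from GPT-2 and take its entropy, averaged over the heads of the final layer. The layer choice and aggregation here are illustrative simplifications; the paper also defines distance-based and norm-weighted variants.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def attention_entropy_per_token(text: str, model_name: str = "gpt2"):
    """Entropy of each token's self-attention distribution, averaged over the
    heads of the final layer; a sketch of the diffuseness predictor."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        atts = model(ids, output_attentions=True).attentions   # tuple of (1, heads, T, T)
    att = atts[-1][0]                                           # final layer: (heads, T, T)
    entropy = -(att * torch.log(att + 1e-12)).sum(dim=-1)       # (heads, T)
    per_token = entropy.mean(dim=0)                             # average over heads
    return list(zip(tok.convert_ids_to_tokens(ids[0].tolist()), per_token.tolist()))

print(attention_entropy_per_token("The keys to the cabinet were on the table."))
```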
Task-oriented dialogue (TOD) systems are mainly based on the slot-filling-based TOD (SF-TOD) framework, in which dialogues are broken down into smaller, controllable units (i.e., slots) to fulfill a specific task. A series of approaches based on this framework achieved remarkable success on various TOD benchmarks. However, we argue that the current TOD benchmarks are limited to surrogate real-world scenarios and that the current TOD models are still a long way from unraveling the scenarios. In this position paper, we first identify current status and limitations of SF-TOD systems. After that, we explore the WebTOD framework, the alternative direction for building a scalable TOD system when a web/mobile interface is available. In WebTOD, the dialogue system learns how to understand the web/mobile interface that the human agent interacts with, powered by a large-scale language model.
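For readers unfamiliar with the SF-TOD framing being critiqued, a toy dialogue state illustrates the slot-based decomposition: the system tracks a small set of task-specific slots across turns and acts once they are filled. Slot names and the update rule below are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Minimal slot-filling dialogue state for a restaurant-booking task."""
    slots: dict = field(default_factory=lambda: {"cuisine": None, "area": None, "time": None})

    def update(self, parsed_turn: dict) -> None:
        # Overwrite a slot whenever the user's turn mentions a new value for it.
        for name, value in parsed_turn.items():
            if name in self.slots:
                self.slots[name] = value

    def is_complete(self) -> bool:
        return all(v is not None for v in self.slots.values())


state = DialogueState()
state.update({"cuisine": "italian"})                 # "I'd like Italian food"
state.update({"area": "centre", "time": "19:00"})
print(state.slots, state.is_complete())              # all slots filled -> query backend
```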